    Designing 3D scenarios and interaction tasks for immersive environments

    Today, immersive reality, such as virtual and mixed reality, is one of the most attractive research fields. Virtual Reality (VR) has huge potential for use in scientific and educational domains by providing users with real-time interaction and manipulation. The key concept in immersive technologies is to provide a high level of immersive sensation to the user, which is one of the main challenges in this field. Wearable technologies play a key role in enhancing the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. This project report presents an application study in which the user interacts with virtual objects, such as grabbing objects and opening or closing doors and drawers, while wearing a sensory cyberglove developed in our lab (Cyberglove-HT). Furthermore, it presents the development of a methodology for inertial measurement unit (IMU)-based gesture recognition. The interaction tasks and 3D immersive scenarios were designed in Unity 3D. Additionally, we developed inertial sensor-based gesture recognition employing a Long Short-Term Memory (LSTM) network. To distinguish the effect of wearable technologies on the user experience in immersive environments, we conducted an experimental study comparing the Cyberglove-HT to standard VR controllers (HTC Vive Controller). The quantitative and subjective results indicate that the Cyberglove-HT enhanced the immersive sensation and self-embodiment. A publication [1] resulted from this work, which was developed in the framework of the R&D project Human Tracking and Perception in Dynamic Immersive Rooms (HTPDIR).
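
    The report does not include the recognition code itself; the following is a minimal sketch of an LSTM classifier over IMU sequences in the spirit described, assuming PyTorch and hypothetical dimensions (6 IMU channels for a 3-axis accelerometer plus 3-axis gyroscope, 8 gesture classes, 100-sample windows). None of these specifics come from the report.

    import torch
    import torch.nn as nn

    class IMUGestureLSTM(nn.Module):
        """Sketch: classify a window of IMU samples into a gesture class."""
        def __init__(self, n_channels=6, hidden_size=64, n_gestures=8):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_channels,
                                hidden_size=hidden_size,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, n_gestures)

        def forward(self, x):
            # x: (batch, time, channels) window of IMU readings
            _, (h_n, _) = self.lstm(x)     # h_n: (num_layers, batch, hidden)
            return self.head(h_n[-1])      # logits over gesture classes

    model = IMUGestureLSTM()
    window = torch.randn(1, 100, 6)        # one 100-sample, 6-channel window
    logits = model(window)                 # shape: (1, 8)

    Taking only the final hidden state keeps the sketch simple; a real pipeline would add training, windowing of the live sensor stream, and normalisation of the IMU channels.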

    Human-Aware Collaborative Robots in the Wild: Coping with Uncertainty in Activity Recognition

    This study presents a novel approach to coping with human behaviour uncertainty during Human-Robot Collaboration (HRC) in dynamic and unstructured environments, such as agriculture, forestry, and construction. These challenging tasks, which often require excessive time and labour and are hazardous for humans, provide ample room for improvement through collaboration with robots. However, integrating humans in the loop raises open challenges due to the uncertainty that comes with the ambiguous nature of human behaviour. Such uncertainty makes it difficult to represent high-level human behaviour from low-level sensory input data. The proposed Fuzzy State-Long Short-Term Memory (FS-LSTM) approach addresses this challenge by fuzzifying ambiguous sensory data and combining activity recognition with sequence modelling using state machines and the LSTM deep learning method. The evaluation compares a traditional LSTM with raw sensory inputs, a Fuzzy-LSTM with fuzzified inputs, and the proposed FS-LSTM. The results show that fuzzified inputs significantly improve accuracy compared to the traditional LSTM and that, while the fuzzy state machine approach yields results similar to the Fuzzy-LSTM, it offers the added benefits of ensuring feasible transitions between activities with improved computational efficiency.
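
    The abstract does not specify the fuzzy sets, activity vocabulary, or transition table, so the sketch below only illustrates the two ideas it names: fuzzifying raw readings into membership degrees before they reach the LSTM, and a state machine that restricts decoding to feasible activity transitions. The membership breakpoints, activity labels, and transition table are illustrative assumptions, and the LSTM classifier itself would follow the shape of the earlier sketch.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular fuzzy membership: 0 outside [a, c], peaking at b.
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzify(reading):
        # Replace a raw scalar with membership degrees for illustrative
        # 'low'/'medium'/'high' fuzzy sets; these feed the LSTM in place
        # of the raw sensory value.
        return np.array([tri(reading, -0.5, 0.0, 0.5),   # low
                         tri(reading,  0.0, 0.5, 1.0),   # medium
                         tri(reading,  0.5, 1.0, 1.5)])  # high

    # Illustrative state machine: which activities may follow which.
    FEASIBLE = {
        "idle":     {"idle", "reach"},
        "reach":    {"reach", "handover", "idle"},
        "handover": {"handover", "idle"},
    }
    LABELS = ["idle", "reach", "handover"]

    def constrained_decode(prev_activity, class_probs):
        # Keep only the transitions the state machine allows, then take
        # the most probable remaining activity (feasible sequences only).
        allowed = [i for i, lab in enumerate(LABELS)
                   if lab in FEASIBLE[prev_activity]]
        return LABELS[max(allowed, key=lambda i: class_probs[i])]

    print(fuzzify(0.3))                                 # [0.4 0.6 0. ]
    print(constrained_decode("idle", [0.1, 0.3, 0.6]))  # 'handover' barred -> 'reach'

    In the last call, 'handover' has the highest network probability but is not a feasible successor of 'idle', so the decoder falls back to 'reach'; this is the mechanism that enforces feasible transitions at low cost.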